Journal of Neurology, Neurosurgery & Psychiatry
BMJ
Preprints posted in the last 7 days, ranked by how well they match Journal of Neurology, Neurosurgery & Psychiatry's content profile, based on 29 papers previously published here. The average preprint has a 0.05% match score for this journal, so anything above that is already an above-average fit.
Morrin, H.; Badenoch, J. B.; Burchill, E.; Fayosse, A.; Singh-Manoux, A.; Shotbolt, P.; Zandi, M. S.; David, A. S.; Lewis, G.; Rogers, J. P.
Background: Depression is associated with an increased risk of subsequent Parkinson's disease. Neuroimaging studies suggest a neurobiological overlap in mechanisms underlying Parkinson's disease and psychomotor retardation in depression. Our aim was to investigate whether, among individuals with depression, the presence of psychomotor retardation was associated with the development of subsequent Parkinson's disease. Methods: In a retrospective cohort study, electronic healthcare records from individuals diagnosed with depression at age 40 or over in a large mental health service in London, UK were examined for the presence of psychomotor retardation. Linkage to general hospital records was used to ascertain diagnoses of Parkinson's disease between 2007 and 2023. Cox regression was used to compare the hazard of Parkinson's disease in individuals with depression with and without psychomotor retardation. Results: Among 6327 patients with depression, 2402 (38.0%) had psychomotor retardation. The adjusted hazard ratio for development of Parkinson's in those with psychomotor retardation was 1.43 (95% CI 1.02 - 2.01, p = 0.04). Secondary analyses demonstrated a significant difference in psychomotor retardation incidence at least 10 years before Parkinson's diagnosis. Conclusions: Psychomotor retardation in later-life depression is associated with increased risk of subsequent Parkinson's diagnosis over an extended period of time, suggesting that the relationship cannot solely be explained by misdiagnosis. Psychomotor retardation may therefore serve as a marker of prodromal Parkinson's disease.
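The reported hazard ratio, confidence interval, and p-value can be cross-checked from the summary statistics alone. A minimal stdlib Python sketch, using only the numbers published in the abstract and assuming the usual normal approximation on the log scale:

```python
import math

def hr_from_summary(hr, ci_low, ci_high):
    """Recover the log-HR standard error, z statistic, and two-sided
    p-value from a reported hazard ratio and its 95% CI
    (normal approximation on the log scale)."""
    log_hr = math.log(hr)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    z = log_hr / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return se, z, p

# Reported association of psychomotor retardation with Parkinson's:
# HR 1.43 (95% CI 1.02 to 2.01)
se, z, p = hr_from_summary(1.43, 1.02, 2.01)
print(f"z = {z:.2f}, p = {p:.3f}")  # p comes out near the reported 0.04
```

The back-calculated p-value agrees with the abstract's p = 0.04, a quick internal-consistency check for any reported HR with a CI.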
Bombaci, A.; Iadarola, A.; Giraudo, A.; Fattori, E.; Sinagra, S.; Magnino, A.; Calvo, A.; Chio', A.; Cicolin, A.
Background: Sleep-wake and circadian disturbances are increasingly recognised in people living with amyotrophic lateral sclerosis (plwALS), but endogenous circadian phase timing and its prognostic significance in early disease remain unclear. We assessed whether salivary dim-light melatonin onset (DLMO), an objective marker of central circadian phase, is altered in early plwALS and whether it provides prognostic information. Methods: In this prospective longitudinal observational study, plwALS within 18 months of symptom onset underwent home-based salivary melatonin sampling under dim light conditions at six predefined time points around habitual sleep onset (HSO). Melatonin profiles were modeled using cubic smoothing splines, and DLMO was defined as the first time the fitted curve reached 3 pg/mL. Clinical, respiratory, and sleep assessments were collected at baseline (T0) and after 6 months (T6); a subgroup repeated saliva sampling at T6. Age- and sex-matched controls underwent melatonin profiling. Associations with disease progression, incident respiratory symptoms, and survival/tracheostomy were examined using regressions and survival analyses. Results: Fifty plwALS were enrolled. Compared with controls, plwALS showed an earlier DLMO (20:24 vs 20:58; p=0.028) despite similar HSO and chronotype. Within the ALS cohort, a later baseline DLMO correlated with worse functional/motor status, faster disease progression, incident dyspnea/orthopnea by T6 (adjusted OR 3.02; p=0.017), and poorer survival/tracheostomy-free outcome. In the re-sampled subgroup (n=28), DLMO and other melatonin-derived metrics did not change over 6 months. Conclusions: Circadian phase alterations are detectable in early ALS. Baseline DLMO may represent a non-invasive prognostic biomarker for progression, respiratory symptom emergence and survival, warranting validation in larger multicentre cohorts.
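The DLMO definition above — the first time the fitted melatonin curve reaches 3 pg/mL — can be sketched in a few lines. This simplified version uses linear interpolation between samples rather than the study's cubic smoothing splines, and the evening profile values are hypothetical:

```python
def dlmo(times_h, melatonin_pg_ml, threshold=3.0):
    """First clock time (in hours) at which a rising melatonin profile
    reaches the threshold, by linear interpolation between samples.
    The study fitted cubic smoothing splines; this is a simplified sketch."""
    for (t0, m0), (t1, m1) in zip(zip(times_h, melatonin_pg_ml),
                                  zip(times_h[1:], melatonin_pg_ml[1:])):
        if m0 < threshold <= m1:
            # linear interpolation of the crossing time within this interval
            return t0 + (threshold - m0) / (m1 - m0) * (t1 - t0)
    return None  # threshold never reached

# Hypothetical evening profile sampled hourly around habitual sleep onset
times = [19.0, 20.0, 21.0, 22.0]   # clock hours
mel = [1.0, 2.0, 4.0, 9.0]         # pg/mL
print(dlmo(times, mel))  # crossing between 20:00 and 21:00 -> 20.5
```

In practice a spline fit smooths sampling noise before the threshold crossing is located, but the crossing logic is the same.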
Bovis, F.; Montobbio, N.; Signori, A.; Kalincik, T.; Arnold, D. L.; Tintore, M.; Kappos, L.; Sormani, M. P.
Disability worsening is the critical long-term outcome in multiple sclerosis, yet the Expanded Disability Status Scale incompletely captures neurological deterioration and has limited sensitivity in the short time windows of clinical trials. Composite endpoints incorporating functional measures have been proposed to address these limitations, but whether they reliably improve detection of treatment effects has not been established across trials. We conducted a post-hoc analysis of individual patient data from ten phase III randomised controlled trials (ASCEND, BRAVO, CONFIRM, DEFINE, EXPAND, INFORMS, OLYMPUS, OPERA I/II, and ORATORIO; n = 9,369), spanning relapsing-remitting and progressive multiple sclerosis. Confirmed disability worsening was defined using harmonised criteria with the msprog package and confirmed at 24 weeks. Treatment effects were estimated using Cox proportional hazards models and combined across trials in a one-stage individual patient data framework. Composite endpoints were constructed from the Expanded Disability Status Scale, the timed 25-foot walk test, and the nine-hole peg test using logical unions (OR-type), intersections (AND-type), and majority-vote structures. Sensitivity to treatment effect was quantified using Z-scores (the ratio of the pooled log-hazard ratio to its standard error) and compared to the Expanded Disability Status Scale reference using interaction tests. Event rates varied across components: the timed walk test generated the highest rates (up to 46.8%) while the nine-hole peg test generated the lowest (as low as 2.1%). OR-type composite endpoints showed weaker treatment effects than the Expanded Disability Status Scale alone, with the largest reductions in sensitivity observed for endpoints incorporating the timed walk test (ΔZ up to +2.26; interaction p = 0.004).
These findings were confirmed across disease subtypes and were pronounced in relapsing-remitting trials, where no composite endpoint outperformed the Expanded Disability Status Scale. In progressive multiple sclerosis, the combination of the Expanded Disability Status Scale and the nine-hole peg test showed numerically stronger treatment effects (ΔZ = -1.65), though interaction tests did not reach statistical significance (p = 0.051). Composite endpoints do not systematically improve treatment effect detection in multiple sclerosis trials. Increased event capture driven by the timed walk test introduces noise that dilutes the treatment signal rather than amplifying it, highlighting that event rate and endpoint quality are not interchangeable. Upper limb function assessed by the nine-hole peg test provides complementary and specific information, particularly in progressive disease. The combination of global disability and upper limb measures represents a promising direction for future endpoint development in progressive multiple sclerosis trials, warranting validation.
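The OR-type and AND-type constructions can be illustrated with a small sketch: an OR-type composite fires at the first component worsening, an AND-type only once every component has worsened. The component names and event times below are hypothetical, not trial data:

```python
def composite_event_time(component_times, rule="OR"):
    """Time of confirmed worsening on a composite endpoint.
    component_times: event time per component (None = no event),
    e.g. EDSS, T25FW, 9HPT worsening times in weeks.
    OR-type fires at the first component event; AND-type only
    once every component has an event."""
    observed = [t for t in component_times if t is not None]
    if rule == "OR":
        return min(observed) if observed else None
    if rule == "AND":
        return max(observed) if len(observed) == len(component_times) else None
    raise ValueError(f"unknown rule: {rule}")

# Hypothetical patient: EDSS worsens at week 48, T25FW at week 24, 9HPT never
times = [48, 24, None]
print(composite_event_time(times, "OR"))   # 24
print(composite_event_time(times, "AND"))  # None
```

This makes the paper's finding intuitive: the union rule inherits every component's events, including noisy ones, which raises event rates without necessarily sharpening the treatment signal.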
Law, S. Y. R.; Mukadam, N.; Pourhadi, N.; Chaudry, A.; Shiakalli, A.; Rai, U.; Livingston, G.
Objective: To examine whether menopausal women who initiate systemic menopausal hormone therapy (MHT) around menopause (45-60 years old) have a different risk of developing dementia than those not taking MHT. Design: Systematic review and meta-analysis of randomised controlled trials and longitudinal observational studies. Risk of bias was assessed using ROB-2 and ROBINS-I V2. Data sources: MEDLINE, Web of Science, EMBASE, and Cochrane Library to 27 March 2026. Eligibility criteria for selecting studies: Studies which measured dementia or cognitive decline in women who initiated systemic MHT between ages 45-60 or within 5 years of menopause, compared with placebo or no MHT. Authors were contacted for additional details if needed. Main outcome measures: Dementia, Alzheimer's disease (AD), cognitive decline. Results: 10 studies totalling 213,678 participants (189,525 in studies with the primary population). There was no significantly increased risk of all-cause dementia in women with a uterus (pooled hazard ratio (HR): 1.12; 95% CI 0.91-1.31, N=78,613, I² = 96.9%), but there was an increased AD risk (HR: 1.14; 95% CI 1.02-1.29, N=134,865, I² = 35.6%). Results were similar in sensitivity analyses including women with or without a uterus. Results for cognitive decline were variable. Conclusions: MHT initiated around the age of menopause should not be prescribed for cognition or dementia prevention. It is not protective against dementia and may increase risk slightly. The magnitude of risk was similar for AD and all-cause dementia, but the latter had wider confidence intervals. Studies which followed up individuals rather than relying on health records lost people to follow-up; this may account for differences in cognitive decline outcomes between studies, as people with cognitive impairment and dementia are more likely not to attend. MHT prescribing should balance benefits against risks, including evidence of a small increased dementia risk.
There are few high-quality studies, so further research would inform recommendations. Systematic review registration: PROSPERO CRD420251010663.

What is already known on this topic:
- Menopausal hormone therapy (MHT) is effective for alleviating vasomotor symptoms. Contemporary guidelines recommend treatment for such symptoms be initiated under age 60 or within 10 years of menopause onset.
- A large randomised trial on the topic found increased risk of dementia in women initiating MHT after the age of 65.
- It is unknown whether initiating MHT around the age of menopause impacts the risk of dementia or cognitive decline.

What this study adds:
- There was no evidence that taking MHT around the time of menopause decreases the risk of dementia or cognitive impairment; it should not be prescribed for these indications.
- We were able to find more studies which examine this question by contacting authors for additional data.
- Initiating MHT in women with a uterus around the age of menopause slightly increased the risk of Alzheimer's disease, by over 10%, with a similar but non-significant effect in the fewer studies of all-cause dementia. Women with or without a uterus show similar results.
- We found no significant difference in cognitive decline, possibly due to loss to follow-up. This may be because most studies of cognitive decline follow up
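For readers unfamiliar with the pooling behind the reported hazard ratios and I² values, here is a minimal fixed-effect inverse-variance sketch in stdlib Python. The study-level estimates below are illustrative, not the review's actual data:

```python
import math

def pool_log_hr(log_hrs, ses):
    """Fixed-effect inverse-variance pooling of per-study log hazard
    ratios, with Cochran's Q and the I^2 heterogeneity statistic."""
    w = [1 / s**2 for s in ses]            # inverse-variance weights
    pooled = sum(wi * b for wi, b in zip(w, log_hrs)) / sum(w)
    se_pooled = math.sqrt(1 / sum(w))
    q = sum(wi * (b - pooled) ** 2 for wi, b in zip(w, log_hrs))
    df = len(log_hrs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return math.exp(pooled), se_pooled, i2

# Illustrative (not the review's) study-level estimates
hr, se, i2 = pool_log_hr([math.log(0.9), math.log(1.4), math.log(1.1)],
                         [0.05, 0.05, 0.05])
print(f"pooled HR {hr:.2f}, I2 {i2:.0f}%")
```

With discordant study estimates, as here, I² lands above 90% — the same situation flagged by the review's all-cause dementia analysis (I² = 96.9%), where a random-effects model would typically be preferred.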
Graure, M.; Nierobisch, N.; De Vere-Tyndall, A. J.; Pakeerathan, T.; Ayzenberg, I.; Gernert, J.; Havla, J.; Ringelstein, M.; Aktas, O.; Tkachenko, D.; Huemmert, M.; Trebst, C.; Cedra Fuertes, N. A.; Papadopoulou, A.; Giglhuber, K.; Wicklein, R.; Berthele, A.; Weller, M.; Kana, V.; Roth, P.; Herwerth, M.
Background: Chronic relapsing inflammatory optic neuropathy (CRION) is a steroid-dependent form of optic neuritis with incompletely understood pathophysiology. The identification of myelin oligodendrocyte glycoprotein antibodies (MOG-IgG) in a substantial patient subset has challenged the diagnostic and therapeutic management. The aim of this study was to investigate clinical profiles and treatment outcomes of patients with CRION, comparing MOG-IgG-positive (MOG+) and seronegative (MOG-) subgroups. Methods: Patients from six European tertiary centers fulfilling diagnostic criteria for CRION were included. All underwent cell-based autoantibody testing. Clinical outcomes (visual acuity, annualized relapse rate), laboratory and imaging findings (MRI, OCT), and treatment responses were retrospectively analyzed. Results: Sixty patients were included (median age 33 years; 70% female); 27 (45%) were MOG+. MOG+ CRION was associated with later onset, higher ARR before treatment (median [IQR] 2 [1-3] vs. 1 [1-2], p = 0.023), and a trend toward shorter inter-relapse intervals. Additional distinguishing features included higher frequencies of antinuclear antibody positivity, elevated CSF interleukin-6, and extensive optic neuritis on MRI. Relapse burden correlated with visual acuity decline and retinal thinning. In MOG+ patients, monoclonal antibody therapy reduced the ARR (n = 21; 2 [1-3] vs. 0 [0-2], p = 0.024), primarily driven by tocilizumab (n = 11; 2 [1-3] vs. 0 [0-1], p = 0.023). In MOG- patients, rituximab and azathioprine showed a trend toward ARR reduction. Conclusion: CRION represents a heterogeneous syndrome encompassing distinct subgroups. MOG+ patients demonstrate higher disease activity but respond favorably to tocilizumab. Serological testing is critical for treatment stratification and preventing relapses.
Nicolai, E. N.; Sieradzan, K.; Schijns, O.; Fry, M. P.; Rijkers, K.; Verner, R.; Baeesa, S. S.; Kurwale, N.; Giannicola, G.; Gordon, C.; Moon, A.; Beraldi, F.; Sen, A.; Mays, D. A.
Objective: Vagus nerve stimulation (VNS) is an established neuromodulation therapy used in the management of drug-resistant epilepsy (DRE), or when other intracranial surgical modalities have not reduced seizure burden. We evaluated whether prior intracranial surgery for epilepsy influences safety and effectiveness outcomes with adjunctive VNS, using real-world data from the CORE-VNS study. Methods: CORE-VNS (NCT03529045), a prospective, multicenter, international observational study, was designed to collect data on seizure and non-seizure outcomes in patients with DRE treated with VNS. Participants were identified as having or not having undergone prior intracranial brain surgery for epilepsy (ICSE) and received an initial VNS implant. Baseline seizure frequency data and patient-reported outcome measures were collected at 3, 6, 12, 24, and 36 months. This analysis compared the baseline data for VNS therapy and safety outcomes at 36 months. Results: Among 531 participants implanted with VNS, prior ICSE had been performed in 84. Median percentage seizure reductions at 36 months for all seizures (76.6% and 76.3%), all focal seizures (83.3% and 71.8%), and all generalized seizures (77.8% and 76.2%) were similar between those without and with a history of ICSE, respectively. The 50% responder rate for all seizures reported at baseline was similar, 64.8% and 61.8%, in both groups, and complete seizure freedom was reported by 17.9% and 8.8%, respectively. Implant-related adverse event (AE) and serious AE rates were similar between groups. Conclusion: VNS was associated with clinically meaningful seizure reductions and showed a consistent safety profile irrespective of the history of ICSE. Prior ICSE should not be a contraindication to the consideration of VNS.
Kmiecik, M. J.; O'Brien, L.; Szpyhulsky, M.; Iodice, V.; Freeman, R.; Jordan, J.; Biaggioni, I.; Kaufmann, H.; Vickery, R.; Miller, A.; Saunders, E.; Rushton, E.; Valle, L.; Norcliffe-Kaufmann, L.
Background: Although neurogenic orthostatic hypotension (nOH) is a common and debilitating feature of multiple system atrophy (MSA), little is known about the real-world burden of symptoms. Objectives: To design and conduct a cross-sectional community-based research survey targeting patients with MSA, with and without nOH. Methods: We recruited patients with MSA to complete an anonymous online survey covering three core themes: 1) timely diagnosis, 2) nOH pharmacotherapy and refractory symptoms, and 3) confidence in physician knowledge. Responses were grouped by pre-specified diagnostic certainty levels. Relationships between symptoms, function, and pharmacotherapy were assessed using univariate and multivariate methods. Results: We analyzed 259 respondents with a self-reported diagnosis of MSA (age: M=64.38, SD=8.09 years; 44% female). In total, 42% also had a diagnosis of nOH; 40% had symptoms highly suspicious of nOH but no diagnosis; and 21% reported having never had their blood pressure measured in the standing position at a clinical visit. Treatment with a pressor agent was independently associated with the presence of other symptoms of autonomic failure. Each additional nOH symptom reported increased the odds of requiring pharmacotherapy by 18%. Yet, despite anti-hypotensive medication use, 97% of patients reported limitations in their ability to bathe, cook, or arise from a chair or bed, with 76% needing caregiver support for refractory nOH symptoms. Conclusions: This cross-sectional representative sample shows nOH is underrecognized and undertreated in MSA patients, leading to substantial functional limitations. It is our hope that these findings are leveraged for planning future trials and advocating for better treatments.
Jansen, C.; Stalter, J.; Reuter, S.; Witt, K.
Background: Accelerated long-term forgetting (ALF), defined as an increased rate of memory loss over extended intervals, has so far been detected in a pilot study of patients with mild multiple sclerosis (MS). This study aimed to (I) confirm the presence of ALF in a larger, heterogeneous MS sample, (II) explore associations with patient-reported outcomes, and (III) assess the diagnostic performance of ALF tests for subjective memory impairment. Methods: This study compared 62 MS patients and 65 age-, sex-, and education-matched healthy controls using standardized memory tests (RAVLT, WMS-IV Logical Memory subtest). Recall was assessed immediately, after 30 minutes, and after 7 days. Seven-day/30-minute recall ratios (QRAVLT, QWMS) served as primary outcomes. Self-report measures included memory complaints, fatigue, depression, and sleep disturbances. Linear regression and receiver operating characteristic (ROC) analyses assessed predictors and diagnostic accuracy. Results: ALF was observed in MS: QRAVLT was lower in patients than in controls (0.64 [95% CI 0.59-0.69] vs. 0.78 [0.73-0.82], p < 0.001), as was QWMS (0.79 [95% CI 0.74-0.84] vs. 0.95 [0.90-1.00], p < 0.001), despite comparable initial learning. Greater fatigue, higher memory complaints, longer disease duration, older age, and greater disability were associated with lower ALF scores. The combined ALF score moderately discriminated subjective memory impairment (AUC 0.74; sensitivity 0.73; specificity 0.73). Conclusion: MS patients showed ALF despite normal initial learning, indicating a specific memory deficit undetected by standard tests. Long-delay recall using the RAVLT and WMS-IV Logical Memory subtest may improve cognitive impairment detection in MS.
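The reported AUC of 0.74 corresponds to the probability that a randomly chosen case scores more abnormally than a randomly chosen non-case. A minimal Mann-Whitney-style sketch with made-up scores (not study data):

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case scores higher than
    a negative case (Mann-Whitney formulation); ties count as 1/2."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical discriminator scores for cases vs non-cases
print(round(auc([0.9, 0.8, 0.7], [0.6, 0.75, 0.5]), 3))  # 0.889
```

This rank-based formulation is exactly what ROC software computes for a continuous score, and it makes clear why an AUC of 0.74 is "moderate": roughly one in four randomly drawn case/non-case pairs would be ordered incorrectly.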
Soto-Ferndandez, P.; Toledo-Rodriguez, L.; Figueroa-Vargas, A.; Figueroa-Taiba, P.; Billeke, P.
Background: Cognitive impairment poses a significant challenge to healthcare systems worldwide, impacting patient autonomy, social participation, and quality of life, while placing a considerable burden on caregivers. Non-pharmacological interventions, particularly cognitive training and non-invasive brain stimulation, have emerged as promising therapeutic strategies. Objective: This study aims to quantify the synergistic effects of transcranial direct current stimulation (tDCS) with cognitive training on cognitive function across a spectrum of pathologies that induce cognitive impairment. Methods: We conducted a systematic review and meta-analysis following PRISMA guidelines. We searched PubMed for randomized controlled trials that investigated the effect of combined tDCS and cognitive training compared with cognitive training alone. The analysis was based on the GRADE framework for systematic reviews and meta-analyses. Results: Across 27 studies including 1,012 participants, tDCS combined with cognitive training showed a small effect compared with cognitive training alone (SMD = 0.36, 95% CI: 0.15 to 0.56). The effect was found only immediately after the intervention and declined during follow-up. Conclusion: tDCS combined with cognitive training may provide a small, short-term benefit for cognitive function, but high heterogeneity across studies and loss of effect at follow-up underscore the need for larger, better-standardized trials to clarify its clinical value.
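The pooled SMD of 0.36 is a standardized mean difference. As a reminder of the underlying arithmetic, here is a Cohen's d sketch with hypothetical arm summaries (the review pooled published effect sizes, not raw data):

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d) between a treatment
    and a control arm, using the pooled standard deviation."""
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                   / (n_t + n_c - 2))
    return (mean_t - mean_c) / sp

# Hypothetical trial arms: tDCS + training vs training alone
d = cohens_d(mean_t=26.0, sd_t=5.0, n_t=20, mean_c=24.2, sd_c=5.0, n_c=20)
print(round(d, 2))  # 0.36, i.e. a small effect like the pooled SMD
```

An SMD of 0.36 means the arms differ by about a third of a standard deviation on the cognitive outcome, conventionally a small-to-moderate effect.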
Polo Sanchez, M.; Lesmes, A. C.; Muni, N.; Vigneault, F.; Novak, R.
Background: Rett Syndrome (RTT) is a severe neurodevelopmental disorder affecting approximately 1 in 10,000 live female births worldwide. The Rett Syndrome Behaviour Questionnaire (RSBQ) remains one of the most widely used standardized behavioral assessment tools for RTT. However, the RSBQ was originally validated only in British English, limiting its applicability for Spanish-speaking caregivers and clinical centers across Latin America and Spain. Objective: The primary aim of this study was to develop and validate the comprehension of the Spanish translation of the RSBQ to ensure cultural and linguistic equivalence, enhance data reliability, and facilitate earlier, more accurate clinical assessments among Spanish-speaking RTT populations. Methods: Surveys were administered in two phases to Spanish-speaking caregivers between November 2023 and September 2025. Phase I consisted of 12 guided survey administrations in which participants could ask clarifying questions and offer linguistic modifications of RSBQ questions. Phase II consisted of independent online administration of the refined Spanish RSBQ and a retest at least 7 days later. Participants were recruited through direct outreach and supported virtually during questionnaire completion. Results: Following data cleaning and quality control, a total of 51 caregivers successfully completed both surveys. The Spanish RSBQ demonstrated high caregiver comprehension and strong engagement across multiple Latin American countries, including Argentina, Mexico, and Peru. Responses were highly correlated between test and retest timepoints, and no question showed biased response distributions.
A slight effect of response interval on test-retest correlation was observed, potentially indicating that natural disease progression confounds retest evaluation over long (>80 day) intervals; however, this effect did not alter the overall linguistic validation results, as an analysis restricted to <21 day test-retest responders confirmed the findings. Conclusions: This linguistic validation study represents the first formal step toward the clinical validation of the Spanish RSBQ, enabling broader inclusion of Spanish-speaking populations in RTT research. The collaborative, bilingual data collection strategy proved both feasible and effective, paving the way for multinational trials and expanding therapeutic accessibility through localized, patient-centered innovation.
Saha, S.; Georgiou-Karistianis, N.; Teo, V.; Szmulewicz, D. J.; Strike, L. T.; Franca, M. C.; Rezende, T. J.; Harding, I. H.
Background Friedreich ataxia (FRDA) is a rare neurodegenerative disorder with substantial heterogeneity in clinical presentation and progression, complicating prognosis and trial design. Neuroimaging offers objective biomarkers to track disease evolution, yet variability in progression patterns remains poorly understood. Objective To identify biologically meaningful FRDA progression subtypes using longitudinal multimodal MRI and assess their associations with demographic, genetic, and clinical factors. Methods Longitudinal structural and diffusion MRI data from 54 FRDA and 57 controls were analysed. Annualised progression rates of macrostructural (volumetric) and microstructural (diffusion) features across cerebellum, brainstem, and spinal cord regions were clustered using Gaussian Mixture Models. Cluster robustness was assessed using per-cluster Jaccard similarity and other validation metrics. Random Forest classification examined predictors of cluster membership. Results Three reproducible clusters/subtypes emerged: micro-dominant/dual progression, characterised by widespread microstructural deterioration with modest volumetric decline; macro-dominant, marked by pronounced volumetric decline with minimal microstructural change; and minimal/no progression, showing negligible change in all measures. FRDA participants predominated in the first two clusters. Random Forest prediction of cluster membership using clinical and demographic variables identified length of the trinucleotide repeat expansion in the FXN gene as key predictor. Conclusions Data-driven clustering of longitudinal MRI identified distinct FRDA subtypes with unique co-progression patterns, underscoring genetic burden as a key driver. Recognising such heterogeneity can improve patient stratification, enable personalised monitoring, and guide targeted therapeutic strategies. 
Future studies should validate these subtypes in larger, more diverse cohorts and integrate additional biomarkers for enhanced precision.
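The clustering step above can be illustrated with a toy one-dimensional, two-component Gaussian mixture fitted by expectation-maximization. The study used multivariate Gaussian Mixture Models over many regional progression rates, so this is only a minimal sketch on synthetic rates:

```python
import math

def em_gmm_1d(xs, iters=200):
    """Minimal two-component 1-D Gaussian mixture fitted by EM,
    a toy stand-in for the multivariate clustering in the study."""
    mu = [min(xs), max(xs)]   # crude initialisation at the extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            dens = [pi[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: update mixture weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return mu, var, pi

# Two synthetic "annualised progression rate" clusters around -2 and +1
xs = [-2.2, -1.9, -2.1, -1.8, 1.1, 0.9, 1.2, 0.8]
mu, var, pi = em_gmm_1d(xs)
print(sorted(round(m, 1) for m in mu))  # approximately [-2.0, 1.0]
```

Real pipelines (e.g. scikit-learn's `GaussianMixture`) add multivariate covariances, multiple restarts, and model selection over the number of components, plus stability checks like the per-cluster Jaccard similarity the authors report.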
Houle, T. T.; Lebowitz, A.; Chtay, I.; Patel, T.; McGeary, D. D.; Turner, D. P.
Importance: Migraine attacks often occur unpredictably, limiting the ability of individuals to initiate timely preventive or preemptive treatment. Short-term probabilistic forecasting of migraine risk could enable more targeted management strategies. Objective: To externally validate the previously developed Headache Prediction Model (HAPRED-I), evaluate an updated continuously learning model (HAPRED-II), and assess the feasibility and short-term safety of delivering individualized probabilistic migraine forecasts directly to patients. Design, Setting, and Participants: Prospective 8-week cohort study conducted remotely at two academic medical centers in the United States (Massachusetts General Hospital and Wake Forest Health Sciences) between 2015 and 2019. Adults with recurrent migraine or tension-type headache completed twice-daily electronic diaries. A total of 230 participants contributed 23,335 diary entries across 11,862 participant-days of observation. Main Outcomes and Measures: Occurrence of a headache attack within 24 hours following each evening diary entry. Model performance was evaluated using discrimination (area under the receiver operating characteristic curve [AUC]) and calibration. Results: External validation of HAPRED-I demonstrated modest discrimination (AUC, 0.59; 95% CI, 0.57-0.61) and poor calibration, with predicted probabilities consistently exceeding observed headache risk. In contrast, the continuously updating HAPRED-II model demonstrated progressive improvement in predictive performance as participant-specific data accumulated. Discrimination increased from an AUC of 0.59 (95% CI, 0.57-0.61) during the first 14 days to 0.66 (95% CI, 0.63-0.70) after the first month, accompanied by improved calibration across predicted risk levels. Over the study period, 6999 individualized forecasts were delivered directly to participants.
No evidence suggested that receipt of forecasts was associated with increasing headache frequency or worsening predicted headache risk trajectories. Conclusions and Relevance: A static migraine forecasting model demonstrated limited transportability to new individuals. In contrast, models that continuously update within individuals may improve predictive accuracy over time and enable real-time delivery of personalized migraine risk forecasts. Further work incorporating richer physiologic and contextual predictors will likely be necessary before such systems can reliably guide clinical treatment decisions.
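Calibration-in-the-large — comparing mean predicted risk to the observed event rate — is the simplest way to see the over-forecasting described for HAPRED-I. A stdlib sketch with hypothetical forecasts, not the study's diary data:

```python
def calibration_summary(pred_probs, outcomes):
    """Calibration-in-the-large plus the Brier score. A model that
    systematically over-forecasts (as HAPRED-I did on external data)
    shows mean predicted risk above the observed event rate."""
    mean_pred = sum(pred_probs) / len(pred_probs)
    observed = sum(outcomes) / len(outcomes)
    brier = sum((p - y) ** 2
                for p, y in zip(pred_probs, outcomes)) / len(outcomes)
    return mean_pred, observed, brier

# Hypothetical forecasts vs next-24h headache occurrence (1 = attack)
preds = [0.6, 0.5, 0.7, 0.4, 0.55]
ys = [1, 0, 0, 0, 1]
mean_pred, observed, brier = calibration_summary(preds, ys)
print(mean_pred > observed)  # True: forecasts exceed observed risk
```

Discrimination (AUC) and calibration are complementary: a model can rank risky days correctly while still over- or under-stating the absolute probability delivered to the patient.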
Robertson, J. W.; Adanyeguh, I.; Ashizawa, T.; Bender, B.; Cendes, F.; Coarelli, G.; Deistung, A.; Diciotti, S.; Durr, A.; Faber, J.; Franca, M. C.; Goricke, S. L.; Grisoli, M.; Joers, J. M.; Klockgether, T.; Lenglet, C.; Mariotti, C.; Martinez, A. R.; Marzi, C.; Mascalchi, M.; Nigri, A.; Oz, G.; Paulson, H.; Rakowicz, M. J.; Reetz, K.; Rezende, T. J.; Sarro, L.; Schols, L.; Synofzik, M.; Timmann, D.; Thomopoulos, S. I.; Thompson, P. M.; van de Warrenburg, B.; Hernandez-Castillo, C. R.; Harding, I. H.
Objective: Spinocerebellar ataxia type 1 (SCA1) is a rare, inherited neurodegenerative disease characterised by progressive deterioration of motor and cognitive function. Here, we illustrate the pattern and evolution of brain atrophy in people with SCA1 using a large multisite dataset. Methods: Structural magnetic resonance imaging data from SCA1 (n=152) and healthy control (n=131) participants from seven sites and two consortia were analyzed using voxel-based morphometry. Cross-sectional stratification and correlations were undertaken with ataxia severity and duration to profile disease evolution. Cerebrocerebellar structural covariance analysis was used to understand the relationship between cerebral and cerebellar tissue atrophy. Results: Atrophy in SCA1 first manifests in the lower brainstem and cerebellar white matter (WM), before progressing to the pons, anterior cerebellum, and cerebellar lobule IX. The midbrain and peri-thalamic WM and the remainder of the cerebellar cortex are then affected, with preferential involvement of specific motor and cognitive areas. Finally, degeneration in the striatum and in cerebral WM corresponding to the corticospinal tract becomes apparent. Atrophy and correlations with ataxia severity are most pronounced in the cerebellar WM and pons. Structural covariance analysis showed reduced correlations between cerebellar and cerebral WM volume in SCA1 participants. Interpretation: Cross-sectional stratification of a large SCA1 cohort by ataxia severity indicates a pattern of atrophy spread across the brainstem, cerebellum, and subcortical grey and white matter. Ongoing volume loss throughout the disease course is most evident in a core set of infra-tentorial brain regions. Atrophy of the cerebellum spans both motor and cognitive functional zones. Cerebellar degeneration is not directly mirrored by downstream effects in the cerebrum.
Auger, S. D.; Varley, J.; Hargovan, M.; Scott, G.
Background: Current medical large language model (LLM) evaluations largely rely on small collections of cases, whereas rigorous safety testing requires large-scale, diverse, and complex cases with verifiable ground truth. Multiple sclerosis (MS) provides an ideal evaluation model, with validated diagnostic criteria and numerous paraclinical tests informing differential diagnosis, investigation, and management. Methods: We generated synthetic MS cases with ground-truth labels for diagnosis, localisation, and management. Four frontier LLMs (Gemini 3 Pro/Flash, GPT 5.2/5 mini) were instructed to analyse cases to provide anatomical localisation, differential diagnoses, investigations, and management plans. An automated evaluator compared these outputs to the ground-truth labels. Blinded subspecialty experts validated 70 cases for realism and automated evaluator accuracy. We then evaluated LLM decision-making across 1,000 cases and scaled to 10,000 to characterise rare, catastrophic failures. Results: Subspecialist expert review confirmed 100% synthetic case realism and 99.8% (95% CI 95.5 to 100) automated evaluation accuracy. Across 1,000 generated MS cases, all LLMs successfully included MS in the differential diagnoses for more than 91% of cases. However, diagnostic competence was not associated with treatment safety. Gemini 3 models had low rates of clinically appropriate steroid recommendations (Flash: 7.2%, 95% CI 5.6 to 8.8; Pro: 15.8%, 95% CI 13.6 to 18.1) compared to GPT 5 mini (23.5%, 95% CI 20.8 to 26.1), frequently overlooking contraindications like active infection. OpenAI models inappropriately recommended acute intravenous thrombolysis for MS cases (9.6% GPT 5.2; 6.4% GPT 5 mini) compared to below 1% for Gemini models. Expanded evaluation (to 10,000 cases) probed these errors in detail.
Thrombolysis was recommended in 10.1% of cases lacking symptom timing information and paradoxically persisted (2.9%) even when symptoms were explicitly documented as more than 14 days old. Conclusion: Automated expert-level evaluation across 10,000 cases characterised artificial intelligence clinical blind spots hitherto invisible to small-scale testing. Massive-scale simulation and automated interrogation should become standard for uncovering serious failures and implementing safety guardrails before clinical deployment exposes patients to risk.
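Confidence intervals for proportions near 100%, like the reported evaluator accuracy, are usually computed with a score-type interval rather than the simple normal approximation, which degenerates at the boundary. A Wilson interval sketch (69/70 is an illustrative count, not necessarily the study's exact tally):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a proportion; unlike the Wald (normal
    approximation) interval it behaves sensibly near 0% and 100%."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(69, 70)  # e.g. 69 of 70 validated cases correct
print(f"{100*lo:.1f}% to {100*hi:.1f}%")
```

Note how the interval is asymmetric around the point estimate and never exceeds 100%, which is why score or exact intervals are preferred for accuracy figures like the one in the abstract.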
Kiwull, L.; Schmeder, V.; Zenker, M.; Mengual Hinojosa, M.; Perkins, J. R.; Ranea, J.; Kluger, G.; Hartlieb, T.; Pringsheim, M.; von Stuelpnagel, C.; Weghuber, D.; Eschermann, K.
Purpose: SYNGAP1-related developmental and epileptic encephalopathy (SYNGAP1-DEE) is characterized by high rates of both epilepsy and autism spectrum disorder (ASD). While the clinical spectrum is well documented, the link between specific seizure semiologies and caregiver-reported autistic behaviors is not well understood. This study analyzed the correlation between ten distinct seizure types, their frequencies, and a caregiver-reported autistic behavior score. Methods: Clinical data were extracted from the PATRE (PATient-based phenotyping and evaluation of therapy for Rare Epilepsies) Registry for SYNGAP1, in the framework of the EURAS project (Grant No. 101080580, Horizon Europe). This study employed a retrospective cross-sectional analysis of caregiver-reported registry data. Analysis was restricted to an analytic cohort of N=337 participants (from a total of N=522 registry participants) with complete data for both the epilepsy questionnaire (including epilepsy status, seizure semiology, and peak seizure frequency items) and the behavior questionnaire. Caregiver-reported autistic behaviors were quantified using a standardized caregiver-reported scale (Likert 1-5). Statistical associations were evaluated using the Wilcoxon rank-sum test to compare caregiver-reported autistic behavior scores across seizure semiologies and Spearman's rank correlation to assess the impact of seizure frequency (9-point scale). Results: Within the analytic cohort (N=337), epilepsy was reported in 259 patients. Eyelid myoclonia was the most prevalent semiology, affecting 64.9% (n=168) of the epilepsy-positive group. Atypical absences (n=77) demonstrated the strongest and most statistically robust association with higher caregiver-reported autistic behavior scores (FDR-adjusted p = 0.001).
Significant associations were also observed for typical absences (n=70, FDR-adjusted p = 0.018), eyelid myoclonia (FDR-adjusted p = 0.018), myoclonic-atonic seizures (n=40, FDR-adjusted p = 0.019), and atonic seizures (n=72, FDR-adjusted p = 0.025). Focal and tonic-clonic seizures showed weaker associations (FDR-adjusted p = 0.026 and p = 0.047, respectively). Crucially, quantitative analysis revealed no significant correlation between ordinal caregiver-reported peak seizure frequency ratings and caregiver-reported autistic behavior scores for any semiology (e.g., eyelid myoclonia: p=0.096; atypical absences: p=0.744). Conclusion: Higher caregiver-reported autistic behavior scores in SYNGAP1-DEE were most strongly associated with the presence of atypical absences, suggesting generalized thalamocortical seizure network dysfunction. In contrast, no detectable association was observed between caregiver-reported autistic behavior scores and the ordinal caregiver-reported peak seizure frequency metric. Atypical absences and related semiologies may therefore serve as clinical "red flags" for more severe neurodevelopmental comorbidity, regardless of reported peak seizure frequency. Abstract Summary: This study investigates the relationship between ten seizure semiologies, seizure frequency, and severity of caregiver-reported autistic behaviors in a large international cohort of N=337 patients with SYNGAP1-DEE. We identify a robust association between elevated caregiver-reported autistic behavior scores and specific thalamocortical seizure patterns, most prominently atypical absences. Notably, this association is independent of seizure frequency: the ordinal, caregiver-reported peak seizure frequency metric showed no detectable association with caregiver-reported autistic behaviors.
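The FDR-adjusted p-values quoted above come from a multiple-testing correction across the ten semiologies; the abstract does not name the procedure, but the Benjamini-Hochberg step-up method is the standard choice for FDR control. A minimal sketch under that assumption (the input p-values below are illustrative, not the study's):

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR procedure)."""
    m = len(pvals)
    # Indices of the raw p-values in ascending order
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, scaling by m/rank and
    # enforcing monotonicity of the adjusted values
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

adj = benjamini_hochberg([0.001, 0.01, 0.02, 0.04, 0.8])
```

The smallest raw p-value (0.001) is scaled by m/1 = 5, giving an adjusted value of 0.005; larger raw p-values receive progressively milder scaling.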
Azizi, H.; Fereshtehnejad, S.-M.; Moqadam, R.; Dadar, M.; Siderowf, A.; Dagher, A.; Zeighami, Y.
Background: Cerebrospinal fluid (CSF) α-synuclein seed amplification assay (SAA) has emerged as a diagnostic biomarker for Parkinson's disease (PD) and has been linked to differences in disease severity and progression. However, whether SAA status predicts responsiveness to levodopa remains unknown. We investigated the longitudinal association between SAA status, levodopa responsiveness, dopaminergic denervation, and motor complications in sporadic PD. Methods: In this longitudinal analysis, PD participants from the Parkinson's Progression Markers Initiative (PPMI) cohort with CSF SAA testing who initiated levodopa treatment were included. SAA- and SAA+ patients were matched on sex, age, and disease duration at treatment initiation. Motor severity was assessed using MDS-UPDRS Part III, with proportional and absolute responsiveness derived from ON and OFF medication states. Motor complications were assessed using MDS-UPDRS Part IV, and dopaminergic dysfunction was quantified using caudate DAT-SPECT. Linear mixed-effects models examined longitudinal differences as a function of SAA status. Findings: In this analysis, 40 SAA- patients were compared to 183 matched SAA+ patients. SAA+ patients showed a slower rate of ON-state motor progression than SAA- patients (0.87 vs 3.47 points/year; p = 0.01). Consistently, proportional levodopa responsiveness increased over time in SAA+ patients while declining in SAA- patients (p = 0.036). These differences were accompanied by lower caudate DAT binding at treatment initiation in SAA- patients (p = 0.002) and faster dopaminergic decline over time (p = 0.008). Although SAA+ patients had fewer motor complications at treatment initiation, their progression was similar. Interpretation: CSF α-synuclein SAA status is associated with divergent levodopa response in PD, with SAA+ patients showing sustained and progressively greater motor benefit, while SAA- patients show declining responsiveness.
Faster dopaminergic denervation in SAA- patients may underlie this difference. SAA status captures clinically relevant heterogeneity that may inform patient stratification and therapeutic decision-making.
Arrotta, K.; Williams, M.; Thompson, N. R.; Bangen, K. J.; Reyes, A.; Zawar, I.; Punia, V.; Wang, I.; Shih, J. J.; Bekris, L. M.; Ferguson, L.; Almane, D. N.; Jones, J. E.; Hermann, B. P.; Busch, R. M.; McDonald, C. R.
Background and Objectives: Older adults with epilepsy have a 2- to 4-fold increased risk of dementia, including Alzheimer's disease (AD), yet underlying mechanisms remain poorly defined. The NIA-AA classifies AD using amyloid (A), tau (T), and neurodegeneration (N) biomarkers. We applied this framework to characterize AT(N) profiles and clinical correlates in epilepsy. Methods: Eighty-four older adults with focal epilepsy (mean age=66.3 years) from the Brain Aging and Cognition in Epilepsy (BrACE) study were classified as A+, T+, and/or (N)+ using plasma β-amyloid (Aβ) 42/40 ratio, phosphorylated tau 181 (p-tau181), and neurofilament light chain (NfL) levels, and grouped into normal, AD-continuum, and non-AD pathologic change. Demographic, clinical, and cognitive characteristics were compared. Cognition was assessed using the International Classification of Cognitive Disorders in Epilepsy (IC-CoDE) and the Montreal Cognitive Assessment (MoCA). Memory was examined using IC-CoDE memory domain classification, with word-list delayed recall analyzed separately. Associations with cognition were modeled using logistic and linear regression. Secondary analyses examined biomarkers continuously, including p-tau217, and substituted hippocampal volume for NfL. Results: Only 32% of participants had normal biomarkers, while 37% were on the AD-continuum and 31% showed non-AD pathologic change. Participants with normal biomarkers were younger with shorter epilepsy duration, whereas APOE-ε4 carriers were enriched in the AD-continuum group. Early-onset compared to late-onset epilepsy (cutoff: ≥55 years) showed higher odds of biomarker abnormality (aOR=8.84, 95% CI [2.35, 41.89], P=0.003), driven by elevated p-tau217, NfL, and greater amyloid burden. While categorical AT(N) profiles were not associated with cognition, higher p-tau181 levels were independently associated with lower word-list delayed recall (95% CI [-10.31, -0.86], P=0.021).
Substituting hippocampal volume for NfL shifted more participants to normal profiles (48% vs. 32%) and fewer to non-AD pathologic change (15% vs. 31%). Discussion: AT(N) biomarker profiles showed substantial heterogeneity, with higher abnormality rates than in aging populations, particularly among those with early-onset epilepsy. Continuous p-tau181 was associated with memory performance while categorical AT(N) profiles were not, and NfL and hippocampal volume showed discordant classifications, highlighting divergence across neurodegeneration markers. These findings underscore the complexity of applying AD-centric frameworks to epilepsy and support multimodal, epilepsy-adapted biomarker approaches to characterize neurodegenerative risk.
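Adjusted odds ratios such as the aOR of 8.84 above are obtained by exponentiating a logistic-regression coefficient. The study's reported interval is asymmetric beyond what a simple Wald construction gives (suggesting a profile-likelihood or small-sample method), so the sketch below shows only the generic Wald recipe, with a made-up coefficient and standard error rather than the study's estimates:

```python
import math

def odds_ratio_wald(beta: float, se: float, z: float = 1.96):
    """Exponentiate a logistic-regression coefficient and its Wald 95% CI.

    Returns (odds ratio, lower bound, upper bound).
    """
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical log-odds coefficient and standard error
or_, lo, hi = odds_ratio_wald(beta=0.75, se=0.2)
```

Because the interval is built on the log-odds scale and then exponentiated, the resulting CI is always asymmetric around the odds ratio itself, as in the [2.35, 41.89] interval reported above.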
Jourdan, O.; Duchiron, M.; Torrent, J.; Turpinat, C.; Mondesert, E.; Busto, G.; Morchikh, M.; Dornadic, M.; Delaby, C.; Hirtz, C.; Thizy, L.; Barnier-Figue, G.; Perrein, F.; Jurici, S.; Gabelle, A.; Bennys, K.; Lehmann, S.
Objectives: To evaluate the diagnostic performance of the α-synuclein seed amplification assay (SAA) and characterize the impact of α-synuclein co-pathology on cognitive and biological profiles in routine clinical practice. Methods: We included 398 patients from the prospective multicenter ALZAN cohort recruited from memory clinics in Montpellier, Nîmes, and Perpignan. All participants underwent CSF and blood sampling with measurement of CSF biomarkers (Aβ42/40, tau, ptau181) and plasma biomarkers (Aβ42/40, ptau181, ptau217, GFAP, NfL). Cognitive assessment was performed using the Mini-Mental State Examination (MMSE). Clinical diagnoses were independently confirmed by two senior neurologists. α-Syn status was determined by SAA (RT-QuIC). Results: Of 398 patients, 19 out of 20 patients with Lewy body dementia (LBD) (95.0%) and 32 out of 203 patients with AD (15.8%) were SAA+. SAA positivity showed a sensitivity of 95% and a specificity of 93.5% for distinguishing LBD from patients without LBD or AD. In the entire cohort, SAA+ patients showed lower MMSE scores (p<0.01), lower CSF Aβ42/40 ratio (p<0.01), and elevated plasma GFAP (p<0.05). Within the AD subgroup, no significant differences in CSF or blood biomarkers were observed between SAA+ and SAA- patients, except for a lower CSF Aβ42/40 ratio in SAA+ patients (p<0.01). Interpretation: SAA demonstrates good diagnostic performance for detecting LBD and confirms notable α-Syn co-pathology in AD. This study highlights the limitations of routine CSF and emerging blood biomarkers in capturing α-Syn pathology and the value of integrating SAA into routine neurodegenerative disease assessment.
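The diagnostic performance figures above follow directly from a 2x2 confusion table: sensitivity is true positives over all disease-positive cases (here 19/20 = 95% for LBD), and specificity is true negatives over all disease-negative cases. A minimal sketch; the 19/20 numerator is taken from the abstract, while the negative-group counts below are purely illustrative, since the abstract does not break them down:

```python
def sens_spec(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity and specificity from 2x2 confusion-table counts."""
    sensitivity = tp / (tp + fn)  # fraction of disease-positive cases detected
    specificity = tn / (tn + fp)  # fraction of disease-negative cases cleared
    return sensitivity, specificity

# 19 of 20 LBD cases SAA+ as reported; tn/fp are hypothetical counts
# chosen only to reproduce a 93.5% specificity for illustration
sens, spec = sens_spec(tp=19, fn=1, tn=187, fp=13)
```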
Khorsand, B.; Teichrow, D.; Lipton, R. B.; Ezzati, A.
Objective: To describe the design, feasibility, and baseline characteristics of the Migraine Impact on Neurocognitive Dynamics (MIND) study, a 30-day smartphone-based cohort for high-frequency assessment of cognition and symptoms in adults with migraine. Background: Cognitive symptoms are an important component of migraine burden, but they are difficult to measure using single-visit testing or retrospective questionnaires. Repeated smartphone-based assessment may better capture real-world variability in cognition and symptoms. Methods: Adults meeting International Classification of Headache Disorders, 3rd edition, criteria for migraine were enrolled remotely and completed 30 days of once-daily ecological momentary assessments and mobile cognitive tasks delivered through the Mobile Monitoring of Cognitive Change platform. Baseline measures assessed demographics, migraine characteristics, disability, mood, stress, and treatment patterns. Feasibility was evaluated using enrollment, completion, and retention metrics. Results: A total of 177 participants enrolled (mean age 38.8 ± 11.9 years; 79.7% female), including 80/177 (45.2%) with chronic migraine. Across the 30-day protocol, 3688 daily assessments were completed, representing 70.8% of all possible study days, and 70.6% of participants completed at least 20 days of monitoring. Completion remained above 60% across study days. At baseline, chronic migraine was associated with greater burden than low-frequency and high-frequency episodic migraine, including higher MIDAS scores (98.6 vs. 38.7 and 70.3), more days with concentration difficulty (16.0 vs. 7.9 and 11.5), and more days with functional interference (18.5 vs. 7.6 and 13.0). Conclusions: The MIND study demonstrates the feasibility of high-frequency smartphone-based assessment of cognition and symptoms in migraine and provides a methodological foundation for future analyses of within-person cognitive and symptom dynamics across the migraine cycle.
Kancheva, I. K.; Voigt, S.; Munting, L.; van Dis, V.; Koemans, E.; van Osch, M. J. P.; Wermer, M. J. H.; Hirschler, L.; van Walderveen, M.; Weerd, L. v. d.
A prominent radiological manifestation of cerebral amyloid angiopathy (CAA) is enlargement of perivascular spaces (EPVS), which is suggested to result from fluid stagnation due to impaired perivascular clearance. Here, we report a novel observation of hypointense rims in cerebral white matter surrounding EPVS near haemorrhages on in vivo 7T Gradient Echo MRI. We hypothesised that the observed black rim pattern denotes iron accumulation that may be caused by incomplete clearance following bleeding. We investigated the occurrence and localisation of this marker on in vivo and ex vivo MRI and examined its histopathological correlates. From MRI data of the prospective longitudinal natural history study of hereditary Dutch-type CAA (D-CAA) at Leiden University Medical Centre, we selected the first 20 consecutive patients who underwent 7T imaging and assessed the presence of black rims on MRI. Post-mortem material was available from one donor with black rims on in vivo scans. Formalin-fixed coronal brain slabs were scanned at 7T MRI, including a high-resolution T2*-weighted sequence. Guided by ex vivo MRI, tissue blocks from representative areas with black rims were sampled for histopathological analysis. Serial sections were stained for iron, calcium, myelin, and general tissue morphology. On in vivo 7T MRI, 9 out of 20 participants exhibited one or several black rims, all located close to a haemorrhage. In the D-CAA donor, ex vivo MRI signal loss matched the in vivo contrast changes. Thirty-six vessels with ex vivo-observed black rims were retrieved and histopathologically examined, showing iron accumulation surrounding perivascular spaces, but the pattern and severity of iron deposition varied. Across groups, vessels displayed microvascular degeneration, including hyaline vessel wall thickening, adventitial fibrosis, and perivascular inflammation. We identified black rims on in vivo 7T MRI and confirmed their correspondence on ex vivo imaging. 
Iron deposition was identified as the underlying correlate of the black rims, although its histopathological appearance was heterogeneous. The preferential deposition of iron around EPVS may indicate incomplete clearance of iron-positive blood-breakdown products after bleeding. The varied pattern of iron accumulation and microvascular alterations may reflect different pathophysiological mechanisms underlying the formation and maintenance of black rims in D-CAA.